RBF Networks


A Robust Prototype-Based Network with Interpretable RBF Classifier Foundations

Saralajew, Sascha, Rana, Ashish, Villmann, Thomas, Shaker, Ammar

arXiv.org Artificial Intelligence

Prototype-based classification learning methods are known to be inherently interpretable. However, this paradigm suffers from major limitations compared to deep models, such as lower performance. This led to the development of the so-called deep Prototype-Based Networks (PBNs), also known as prototypical parts models. In this work, we analyze these models with respect to different properties, including interpretability. In particular, we focus on the Classification-by-Components (CBC) approach, which uses a probabilistic model to ensure interpretability and can be used as a shallow or deep architecture. We show that this model has several shortcomings, like creating contradicting explanations. Based on these findings, we propose an extension of CBC that solves these issues. Moreover, we prove that this extension has robustness guarantees and derive a loss that optimizes robustness. Additionally, our analysis shows that most (deep) PBNs are related to (deep) RBF classifiers, which implies that our robustness guarantees generalize to shallow RBF classifiers. The empirical evaluation demonstrates that our deep PBN yields state-of-the-art classification accuracy on different benchmarks while resolving the interpretability shortcomings of other approaches. Further, our shallow PBN variant outperforms other shallow PBNs while being inherently interpretable and exhibiting provable robustness guarantees.


Kolmogorov-Arnold Networks are Radial Basis Function Networks

Li, Ziyao

arXiv.org Artificial Intelligence

This short paper is a fast proof-of-concept that the third-order B-splines used in Kolmogorov-Arnold Networks (KANs) can be well approximated by Gaussian radial basis functions. Doing so leads to FastKAN, a much faster implementation of KAN which is also a radial basis function (RBF) network.
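The paper's claim can be illustrated numerically (a sketch, not the FastKAN implementation): a single scaled Gaussian tracks the cubic B-spline basis function closely. The height 2/3 and width 0.85 below are hand-fitted for this illustration, not values taken from the paper.

```python
import numpy as np

def bspline3(x):
    # Cubic (order-3) B-spline basis function centred at 0, support [-2, 2]
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

def gaussian_surrogate(x, height=2/3, width=0.85):
    # Scaled Gaussian standing in for the spline basis; height and width
    # are hand-fitted here, not the parameters from the FastKAN paper
    return height * np.exp(-(x / width) ** 2)

grid = np.linspace(-2.5, 2.5, 501)
max_err = np.max(np.abs(bspline3(grid) - gaussian_surrogate(grid)))
```

Over the spline's support the pointwise gap stays around 0.01, which is the kind of closeness that makes swapping splines for RBFs attractive.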


An Enhanced Analysis of Traffic Intelligence in Smart Cities Using Sustainable Deep Radial Function

Ismaeel, Ayad Ghany, Mary, S. J. Jereesha, Anitha, C., Logeshwaran, Jaganathan, Mahmood, Sarmad Nozad, Alani, Sameer, Shather, Akram H.

arXiv.org Artificial Intelligence

Smart cities have revolutionized urban living by incorporating sophisticated technologies to optimize various aspects of urban infrastructure, such as transportation systems. Effective traffic management is a crucial component of smart cities, as it has a direct impact on the quality of life of residents and tourists. Utilizing deep radial basis function (RBF) networks, this paper describes a novel strategy for enhancing traffic intelligence in smart cities. Traditional methods of traffic analysis frequently rely on simplistic models that are incapable of capturing the intricate patterns and dynamics of urban traffic systems. Deep learning techniques, such as deep RBF networks, have the potential to extract valuable insights from traffic data and enable more precise predictions and decisions. In this paper, we propose an RBF-based method for enhancing smart city traffic intelligence. Deep RBF networks combine the adaptability and generalization capabilities of deep learning with the discriminative capability of radial basis functions. The proposed method can effectively learn intricate relationships and nonlinear patterns in traffic data by leveraging the hierarchical structure of deep neural networks. By incorporating these rich and diverse data, the deep RBF model can learn to predict traffic conditions, identify congestion patterns, and make informed recommendations for optimizing traffic management strategies. To evaluate the efficacy of our proposed method, extensive experiments and comparisons with real-world traffic datasets from a smart city environment were conducted. In terms of prediction accuracy and efficiency, the results demonstrate that the deep RBF-based approach outperforms conventional traffic analysis methods. Smart city traffic intelligence is enhanced by the model's capacity to capture nonlinear relationships and manage large-scale datasets.


On the universal approximation property of radial basis function neural networks

Ismayilova, Aysu, Ismayilov, Muhammad

arXiv.org Artificial Intelligence

In this paper we consider a new class of RBF (Radial Basis Function) neural networks, in which smoothing factors are replaced with shifts. We prove under certain conditions on the activation function that these networks are capable of approximating any continuous multivariate function on any compact subset of the $d$-dimensional Euclidean space. For RBF networks with finitely many fixed centroids we describe conditions guaranteeing approximation with arbitrary precision.
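The approximation property can be checked numerically. The sketch below uses a standard Gaussian RBF with finitely many fixed centroids and least-squares weights; the width, centroid count, and target function are illustrative choices, and this uses a conventional smoothing factor rather than the shift-based variant the paper introduces.

```python
import numpy as np

centers = np.linspace(0, 1, 20)  # finitely many fixed centroids
width = 0.1                      # illustrative smoothing factor

def phi(x):
    # Gaussian RBF feature matrix, shape (len(x), len(centers))
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Target: a continuous function on the compact set [0, 1]
x_train = np.linspace(0, 1, 200)
y_train = np.sin(2 * np.pi * x_train)

# Output weights via least squares
w, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)
max_err = np.max(np.abs(phi(x_train) @ w - y_train))
```

With only 20 fixed Gaussians the fit error on this compact set is already tiny, consistent with the density results the paper proves for its shift-based class.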


Improved Hidden Markov Model Speech Recognition Using Radial Basis Function Networks

Neural Information Processing Systems

A high performance speaker-independent isolated-word hybrid speech recognizer was developed which combines Hidden Markov Models (HMMs) and Radial Basis Function (RBF) neural networks. In recognition experiments using a speaker-independent E-set database, the hybrid recognizer had an error rate of 11.5% compared to 15.7% for the robust unimodal Gaussian HMM recognizer upon which the hybrid system was based. These results and additional experiments demonstrate that RBF networks can be successfully incorporated in hybrid recognizers and suggest that they may be capable of good performance with fewer parameters than required by Gaussian mixture classifiers. A global parameter optimization method designed to minimize the overall word error rather than the frame recognition error failed to reduce the error rate. A hybrid isolated-word speech recognizer was developed which combines neural network and Hidden Markov Model (HMM) approaches.


Boosting the Performance of RBF Networks with Dynamic Decay Adjustment

Neural Information Processing Systems

Radial Basis Function (RBF) Networks, also known as networks of locally-tuned processing units (see [6]), are well known for their ease of use. Most algorithms used to train these types of networks, however, require a fixed architecture, in which the number of units in the hidden layer must be determined before training starts. The RCE training algorithm, introduced by Reilly, Cooper and Elbaum (see [8]), and its probabilistic extension, the P-RCE algorithm, take advantage of a growing structure in which hidden units are only introduced when necessary. The nature of these algorithms allows training to reach stability much faster than is the case for gradient-descent based methods. Unfortunately P-RCE networks do not adjust the standard deviation of their prototypes individually, using only one global value for this parameter.
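The limitation the abstract points at, one global width versus per-prototype widths, is easy to see on a toy example (a minimal sketch; the prototypes, widths, and test point below are made up for illustration, not from the paper):

```python
import numpy as np

def rbf_scores(x, prototypes, sigmas):
    # One Gaussian activation per prototype; each prototype carries its own
    # width, which a single global value (as in P-RCE) cannot express
    d2 = np.sum((x[None, :] - prototypes) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigmas ** 2))

prototypes = np.array([[0.0, 0.0], [3.0, 0.0]])  # two toy class prototypes
x = np.array([1.2, 0.0])

per_proto = rbf_scores(x, prototypes, np.array([0.5, 2.0]))  # tight vs broad
global_w = rbf_scores(x, prototypes, np.array([1.0, 1.0]))   # one shared width

pred_individual = int(np.argmax(per_proto))  # broad prototype wins: class 1
pred_global = int(np.argmax(global_w))       # nearest prototype wins: class 0
```

With individually adjusted widths the broad cluster correctly claims the point even though the tight prototype is nearer, which is the behaviour a single global standard deviation cannot produce.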


NeuroScale: Novel Topographic Feature Extraction using RBF Networks

Neural Information Processing Systems

Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996).


New Research on Radial Basis functions part4(Machine Learning)

#artificialintelligence

Abstract: In biomechanics, geometries representing complicated organic structures are consistently segmented from sparse volumetric data or morphed from template geometries, resulting in initial overclosure between adjacent geometries. In FEA, these overclosures cause numerical instability and inaccuracy in contact analysis. Several techniques exist to fix overclosures, but most suffer from several drawbacks. This work introduces a novel automated iterative algorithm to remove overclosure and create a desired minimum gap for 2D and 3D finite element models. The RBF network algorithm is presented in terms of its four major steps for removing the initial overclosure.


Machine Learning and AI: Support Vector Machines in Python

#artificialintelligence

Support Vector Machines (SVM) are one of the most powerful machine learning models around, and this topic has been one that students have requested ever since I started making courses. These days, everyone seems to be talking about deep learning, but in fact there was a time when support vector machines were seen as superior to neural networks. One of the things you'll learn about in this course is that a support vector machine actually is a neural network, and they essentially look identical if you were to draw a diagram. The toughest obstacle to overcome when you're learning about support vector machines is that they are very theoretical. This theory very easily scares a lot of people away, and it might feel like learning about support vector machines is beyond your ability.


Radial Basis Function. Radial basis function is derived from…

#artificialintelligence

Radial basis function networks are motivated by Cover's Theorem on the Separability of Patterns: "A complex pattern classification problem, cast in a high dimensional space non linearly, is more likely to be linearly separable than in a low dimensional space, provided that the space is not densely populated". Hidden neuron activations in an RBFN are computed using an exponential of a distance measure (Euclidean distance) between input vectors and prototype vectors associated with hidden neurons. The RBFN was originally introduced for interpolation of data points from a finite training set T = {(x_k, d_k)}, k = 1, ..., Q. Solving the exact interpolation problem means finding a map f such that f(x_k) = d_k for k = 1, ..., Q. Three radial basis functions are commonly used in ML: 1. Gaussian functions 2. Multiquadric 3. Inverse multiquadric. When deciding whether to use an RBF network or an MLP, there are several factors to consider.
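The exact interpolation problem described above reduces to one linear solve: build the Q x Q matrix of Gaussian activations between training points and solve for the output weights. A minimal sketch (the data, kernel width, and target function are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(8, 2))   # training inputs x_k, k = 1..Q
d = np.sin(X[:, 0]) + X[:, 1] ** 2    # targets d_k

def gram(A, B, width=1.0):
    # Gaussian RBF activations: exp(-||a - b||^2 / width^2)
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / width ** 2)

Phi = gram(X, X)              # Q x Q interpolation matrix (one centre per point)
w = np.linalg.solve(Phi, d)   # exact interpolation weights

f_train = Phi @ w             # f(x_k), which matches d_k exactly
```

Because the Gaussian kernel matrix is positive definite for distinct points, the solve always succeeds and f reproduces every target d_k exactly, which is precisely the "exact interpolation" setting from which RBFNs grew.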